A.I. (De Kai)

Philosophers, politicians and populations have long wrestled with the thorny trade-offs among competing goals. Short-term instant gratification? Long-term happiness? Avoidance of extinction? Individual liberties? Collective good? Bounds on inequality? Equal opportunity? Degree of governance? Free speech? Safety from harmful speech? Allowable degree of manipulation? Tolerance of diversity? Permissible recklessness? Rights versus responsibilities? There’s no universal consensus on such goals, let alone on even more triggering issues like gun rights, reproductive rights or geopolitical conflicts. In fact, the OpenAI saga amply demonstrates how impossible it is to align goals among even a tiny handful of the company’s own leaders. How on earth can A.I. be aligned with all of humanity’s goals?